Representation learning of spatial and geographic data is a rapidly developing field that enables similarity detection between regions and high-quality inference using deep neural networks. However, past approaches have concentrated on embedding raster imagery (maps, street or satellite photos), mobility data, or road networks. In this paper, we propose the first method for vector representations of OpenStreetMap (OSM) regions with respect to urban functions and land use in a micro-region grid. We identify a subset of OSM tags related to the major characteristics of land use, building and urban-area functions, and water, green, or other natural areas. Through manual verification of tagging quality, we selected 36 cities for training the region representations. Uber's H3 index was used to divide the cities into hexagons, and OSM tags were aggregated for each hexagon. We propose the Hex2Vec method based on the skip-gram model with negative sampling. The resulting vector representations exhibit semantic structure of the map characteristics, similar to that found in vector-based language models. We also present insights from region similarity detection in six Polish cities and propose a typology of regions obtained through agglomerative clustering.
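As a concrete illustration of the aggregation step described above, the following Python sketch assigns OSM point features to H3 hexagons and counts tags per hexagon. It is not the authors' code: the coordinates and tags are invented, and the `geo_to_h3` call assumes the h3-py v3 API (h3-py v4 renames it to `latlng_to_cell`).

```python
# Minimal sketch (not the authors' code): aggregate OSM tags per H3 hexagon.
from collections import Counter, defaultdict

import h3  # pip install h3; v3-style API assumed here

# Hypothetical toy input: (latitude, longitude, OSM tag) triples.
osm_features = [
    (51.1079, 17.0385, "building=apartments"),
    (51.1082, 17.0390, "leisure=park"),
    (51.1150, 17.0300, "landuse=industrial"),
]

RESOLUTION = 9  # H3 resolution controlling the micro-region size

tag_counts_per_hex = defaultdict(Counter)
for lat, lon, tag in osm_features:
    hex_id = h3.geo_to_h3(lat, lon, RESOLUTION)  # hexagon containing the point
    tag_counts_per_hex[hex_id][tag] += 1

# Each hexagon now has a sparse tag-count vector that a skip-gram-style model
# with negative sampling could consume.
for hex_id, counts in tag_counts_per_hex.items():
    print(hex_id, dict(counts))
```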
We selected 48 European cities and gathered their public transport timetables in the GTFS format. We utilized Uber's H3 spatial index to divide each city into hexagonal micro-regions. Based on the timetable data, we created features describing the quantity and variety of public transport availability in each region. Next, we trained an auto-associative deep neural network to embed each region. With such prepared representations, we used a hierarchical clustering approach to identify similar regions. To do so, we utilized an agglomerative clustering algorithm with Euclidean distance between regions and Ward's method to minimize intra-cluster variance. Finally, we analyzed the obtained clusters at different levels to identify a number of clusters that qualitatively describe public transport availability. We believe that our typology matches the characteristics of the analyzed cities and allows for a successful search for regions with similar public transport schedule characteristics.
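The clustering step lends itself to a short sketch. The snippet below is not the authors' code; randomly generated embeddings stand in for the autoencoder outputs. It applies agglomerative clustering with Ward's linkage and Euclidean distance, then cuts the resulting hierarchy at several levels.

```python
# Minimal sketch: Ward agglomerative clustering of region embeddings.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
region_embeddings = rng.normal(size=(200, 16))  # stand-in for autoencoder outputs

# Ward's linkage minimises intra-cluster variance and uses Euclidean distance.
Z = linkage(region_embeddings, method="ward")

# Inspect the hierarchy at different granularities, e.g. 3, 5 and 10 clusters.
for k in (3, 5, 10):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(f"{k} clusters -> sizes: {np.bincount(labels)[1:]}")
```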
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch or converting deep artificial neural networks to SNNs without loss of performance has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network with or without convolutional layers, batch normalization and max pooling layers was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero percent drop in accuracy on CIFAR10, CIFAR100 and the ImageNet-like data sets Places365 and PASS. More generally our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
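The exact construction involves carefully chosen thresholds and time windows; the toy numpy sketch below only illustrates the underlying idea of time-to-first-spike coding, namely that a ReLU activation can be read off from a single spike time. It uses simplifying assumptions (inputs in [0, 1], thresholds chosen so that no neuron fires before the input window closes) and is not the paper's algorithm.

```python
# Toy sketch of single-spike (time-to-first-spike) coding of ReLU activations.
# All numbers are invented; this is an illustration, not the paper's mapping.
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out = 8, 4
W = rng.normal(scale=0.5, size=(n_out, n_in))   # trained ReLU weights (stand-in)
b = rng.normal(scale=0.1, size=n_out)           # trained ReLU biases (stand-in)
x = rng.uniform(size=n_in)                      # inputs assumed to lie in [0, 1]

relu_out = np.maximum(0.0, W @ x + b)           # reference ReLU activations

# --- single-spike view -------------------------------------------------------
t_in = 1.0 - x                                  # earlier spike = larger value
A = 10.0                                        # ramp slope after the input window
# Threshold chosen so no neuron can fire before the input window closes at t = 1.
theta = np.clip(W, 0.0, None).sum(axis=1) + np.clip(b, 0.0, None) + 1e-9

# Membrane potential at t = 1: each input spike injects a constant current w_i
# from its spike time onward, plus a bias current active on [0, 1].
v_at_1 = (W * (1.0 - t_in)).sum(axis=1) + b

# After t = 1 every neuron ramps with slope A; it fires when it reaches theta,
# or is forced to fire at t_max (which encodes an activation of exactly zero).
t_max = 1.0 + theta / A
t_spike = np.minimum(1.0 + (theta - v_at_1) / A, t_max)

decoded = A * (t_max - t_spike)                 # map spike time back to activation
print(np.allclose(decoded, relu_out))           # True: one spike per neuron
```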
Graph Neural Networks (GNNs) are a family of neural networks operating on graphs, inspired by the mechanisms existing between nodes of a graph. In recent years there has been increased interest in GNNs and their derivatives, i.e., Graph Attention Networks (GAT), Graph Convolutional Networks (GCN), and Graph Recurrent Networks (GRN). Their usability in computer vision has also been growing. The number of GNN applications in this field continues to expand; it includes video analysis and understanding, action and behavior recognition, computational photography, image and video synthesis from zero or few shots, and many more. This contribution aims to collect papers published on GNN-based approaches to computer vision. They are described and summarized from three perspectives. Firstly, we investigate the architectures of Graph Neural Networks and their derivatives used in this area to provide accurate and explainable recommendations for ensuing investigations. Secondly, we present the datasets used in these works. Finally, using graph analysis, we examine relations between GNN-based studies in computer vision and potential sources of inspiration identified outside of this field.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Sparse modelling or model selection with categorical data is challenging even for a moderate number of variables, because roughly one parameter is needed to encode a single category or level. The Group Lasso is a well-known, efficient algorithm for selecting continuous or categorical variables, but all estimates related to a selected factor usually differ, so a fitted model may not be sparse, which makes model interpretation difficult. To obtain a sparse solution of the Group Lasso we propose the following two-step procedure: first, we reduce data dimensionality using the Group Lasso; then, to choose the final model, we use an information criterion on a small family of models prepared by clustering levels of individual factors. We investigate the selection correctness of the algorithm in a sparse high-dimensional scenario. We also test our method on synthetic as well as real datasets and show that it performs better than state-of-the-art algorithms with respect to prediction accuracy or model dimension.
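A hedged sketch of the second step of such a procedure is given below. It assumes the Group Lasso has already selected one categorical factor and that per-level coefficient estimates are available (plain level means stand in for them here); candidate models are produced by 1-D clustering of those coefficients and compared with BIC. The data, the clustering scheme, and the choice of criterion are illustrative assumptions and may differ from the paper's.

```python
# Illustrative second step: merge levels of one selected factor and pick by BIC.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
n, n_levels = 400, 6
levels = rng.integers(0, n_levels, size=n)               # one categorical factor
true_level_effects = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 2.5])
y = true_level_effects[levels] + rng.normal(scale=0.5, size=n)

# Stand-in for Group-Lasso estimates of the per-level effects (here: level means).
level_coef = np.array([y[levels == j].mean() for j in range(n_levels)])

def bic_for_partition(groups):
    """Refit a means model with the merged levels and return its BIC."""
    merged = groups[levels]                               # map level -> cluster id
    fitted = np.zeros(n)
    for g in np.unique(merged):
        fitted[merged == g] = y[merged == g].mean()
    rss = np.sum((y - fitted) ** 2)
    k = len(np.unique(merged))                            # number of free means
    return n * np.log(rss / n) + k * np.log(n)

# Nested family of partitions from 1-D Ward clustering of the level coefficients.
Z = linkage(level_coef.reshape(-1, 1), method="ward")
candidates = {k: fcluster(Z, t=k, criterion="maxclust") for k in range(1, n_levels + 1)}

best_k = min(candidates, key=lambda k: bic_for_partition(candidates[k]))
print("chosen number of merged levels:", best_k)          # expect 3 for this data
print("level -> cluster:", candidates[best_k])
```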
The visual oddity task is regarded as a universal, culture-independent test of analytic intelligence in humans. Advances in artificial intelligence have led to major breakthroughs, yet competing with humans on such analytic intelligence tasks remains challenging and typically resorts to non-biological architectures. We present a biologically realistic system that receives inputs from synthetic eye movements (saccades) and processes them with neurons incorporating the dynamics of neocortical neurons. We introduce a procedurally generated visual oddity dataset to train an architecture extending conventional relational networks, as well as our proposed system. Both approaches surpass human accuracy, and we find that they share the same underlying mechanism of reasoning. Finally, we show that the biologically inspired network achieves superior accuracy, learns faster, and requires fewer parameters than the conventional network.
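For reference, a minimal PyTorch sketch of a conventional relation network of the kind being extended is shown below: a shared MLP scores every ordered pair of object embeddings, the scores are summed, and a second MLP maps the aggregate to logits. The sizes and the two-class output are assumptions for illustration; the paper's extended architecture and the biologically realistic system are more involved.

```python
# Minimal relation-network sketch: pairwise MLP g, sum-pool, readout MLP f.
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim=32, hidden=64, n_classes=2):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_classes))

    def forward(self, objects):                  # objects: (batch, n_obj, obj_dim)
        b, n, d = objects.shape
        left = objects.unsqueeze(2).expand(b, n, n, d)   # object i
        right = objects.unsqueeze(1).expand(b, n, n, d)  # object j
        pairs = torch.cat([left, right], dim=-1).reshape(b, n * n, 2 * d)
        relations = self.g(pairs).sum(dim=1)             # aggregate over all pairs
        return self.f(relations)                         # task logits

# Usage on a dummy batch: 7 "object" embeddings per example (e.g., one per panel).
model = RelationNetwork()
logits = model(torch.randn(4, 7, 32))
print(logits.shape)                                      # torch.Size([4, 2])
```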
We propose three novel pruning techniques to improve the cost and results of inference-aware Differentiable Neural Architecture Search (DNAS). First, we introduce a stochastic bi-path building block for DNAS that can search over inner hidden dimensions while keeping memory and compute complexity under control. Second, we present an algorithm for pruning blocks within a stochastic layer of the SuperNet during the search. Third, we describe a novel technique for pruning unnecessary stochastic layers during the search. The optimized models resulting from the search are called PruNet and establish a new state-of-the-art Pareto frontier for NVIDIA V100 in terms of inference latency versus ImageNet Top-1 image classification accuracy. PruNet as a backbone also outperforms GPUNet and EfficientNet on the COCO object detection task with respect to mean average precision (mAP).
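To make the idea of a stochastic bi-path block more tangible, the following hedged PyTorch sketch mixes two candidate sub-blocks of different inner width with a learned architecture weight and keeps only one candidate once the other clearly dominates. It illustrates the general DNAS mechanism only and is not the paper's actual block or pruning rule.

```python
# Illustrative two-candidate DNAS block with a learned mixing weight.
import torch
import torch.nn as nn

class BiPathBlock(nn.Module):
    def __init__(self, channels, hidden_small=16, hidden_large=64):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, h, 1), nn.ReLU(),
                          nn.Conv2d(h, channels, 1))
            for h in (hidden_small, hidden_large)
        ])
        self.arch_logits = nn.Parameter(torch.zeros(2))   # learned path preference

    def forward(self, x):
        weights = torch.softmax(self.arch_logits, dim=0)
        return weights[0] * self.paths[0](x) + weights[1] * self.paths[1](x)

    def dominant_path(self, tau=0.95):
        """Return the index of the path to keep if one clearly dominates, else None."""
        weights = torch.softmax(self.arch_logits, dim=0)
        return int(weights.argmax()) if weights.max() > tau else None

block = BiPathBlock(channels=8)
out = block(torch.randn(2, 8, 32, 32))
print(out.shape, block.dominant_path())
```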
Cardiac magnetic resonance (CMR) imaging is commonly used to assess cardiac anatomy and function. Delineation of the left ventricular blood pool and the left ventricular myocardium is important for diagnosing cardiac diseases. Unfortunately, patient motion during the CMR acquisition procedure may result in motion artifacts appearing in the final images. Such artifacts decrease the diagnostic quality of CMR images and can force the procedure to be repeated. In this paper, we present a multi-task Swin UNETR (Swin UNEt TRansformer) network for solving two tasks simultaneously in the CMRxMotion challenge: CMR segmentation and motion artifact classification. We treat segmentation and classification as a multi-task learning problem, which allows us to determine the diagnostic quality of the CMR and generate segmentation masks at the same time. CMR images are classified into three diagnostic quality classes, while all samples with non-severe motion artifacts are segmented. An ensemble of five networks trained with 5-fold cross-validation achieves a segmentation performance of 0.871 Dice coefficient and a classification accuracy of 0.595.
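The multi-task setup can be sketched as a shared encoder feeding both a segmentation decoder and a classification head, with the two losses summed during training. In the illustrative PyTorch snippet below, a tiny CNN stands in for the Swin-transformer-based backbone actually used; the channel counts, class counts, and unweighted loss sum are assumptions.

```python
# Illustrative multi-task model: shared encoder, segmentation + classification heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_ch=1, n_seg_classes=4, n_quality_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(                         # per-pixel mask logits
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_seg_classes, 1),
        )
        self.cls_head = nn.Sequential(                         # image-level quality logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_quality_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

model = MultiTaskNet()
image = torch.randn(2, 1, 128, 128)
seg_logits, cls_logits = model(image)
seg_target = torch.randint(0, 4, (2, 128, 128))
cls_target = torch.randint(0, 3, (2,))
loss = nn.CrossEntropyLoss()(seg_logits, seg_target) + nn.CrossEntropyLoss()(cls_logits, cls_target)
print(seg_logits.shape, cls_logits.shape, float(loss))
```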
Assessing tumor burden through magnetic resonance imaging (MRI) is essential for evaluating treatment response in glioblastoma. This assessment is complex to perform and is associated with high variability due to the high heterogeneity and complexity of the disease. In this work, we address this issue and propose a deep learning pipeline for fully automated, end-to-end analysis of glioblastoma patients. In a first step, our approach simultaneously identifies tumor sub-regions, including the tumor, the peritumoral region, and the surgical cavity, and then computes volumetric and bidimensional measurements following the current Response Assessment in Neuro-Oncology (RANO) criteria. In addition, we introduce a rigorous manual annotation process in which human experts delineated the tumor sub-regions and captured their confidence in the segmentations, which was later used when training the deep learning models. The results of our extensive experimental study, performed over 760 pre-operative and 504 post-operative glioma patients obtained from public databases (acquired at 19 sites, 2021-2020) and from a clinical treatment trial (47 and 69 sites for pre-/post-operative patients, 2009-2011), and backed up with thorough quantitative, qualitative, and statistical analyses, show that our pipeline performs accurate segmentation of pre- and post-operative MRIs in a fraction of the manual delineation time (up to 20 times faster than humans). The bidimensional and volumetric measurements are in close agreement with expert radiologists, and we show that RANO measurements are not always sufficient to quantify tumor burden.
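As an illustration of the simplest measurements involved, the hedged snippet below computes the tumor volume from a binary segmentation mask and the longest in-plane diameter on the axial slice with the largest lesion area. The full RANO bidimensional product additionally requires the longest perpendicular diameter and lesion-wise rules, which are omitted here; the mask and voxel spacing are synthetic.

```python
# Illustrative volume and longest in-plane diameter from a binary mask.
import numpy as np
from scipy.spatial.distance import pdist

mask = np.zeros((16, 64, 64), dtype=bool)        # (slices, rows, cols)
mask[6:10, 20:35, 25:45] = True                  # synthetic "lesion"
spacing = (5.0, 1.0, 1.0)                        # voxel size in mm (z, y, x)

# Volume: voxel count times voxel volume, reported in millilitres.
voxel_volume_mm3 = float(np.prod(spacing))
volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0

# Longest in-plane diameter on the axial slice with the largest lesion area.
areas = mask.sum(axis=(1, 2))
best_slice = int(areas.argmax())
coords_mm = np.argwhere(mask[best_slice]) * np.array(spacing[1:])
longest_diameter_mm = pdist(coords_mm).max() if len(coords_mm) > 1 else 0.0

print(f"volume: {volume_ml:.1f} ml, longest diameter: {longest_diameter_mm:.1f} mm")
```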